Learning with errors (Wikipedia English edition)
Learning with errors (LWE) is a problem in machine learning that is conjectured to be hard to solve. Introduced by Oded Regev in 2005,〔Oded Regev, "On lattices, learning with errors, random linear codes, and cryptography," in Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing (Baltimore, MD, USA: ACM, 2005), 84–93, http://portal.acm.org/citation.cfm?id=1060590.1060603.〕 it is a generalization of the parity learning problem. Regev showed, furthermore, that the LWE problem is as hard to solve as several worst-case lattice problems. The LWE problem has recently been used as a hardness assumption to create public-key cryptosystems,〔Chris Peikert, "Public-key cryptosystems from the worst-case shortest vector problem: extended abstract," in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (Bethesda, MD, USA: ACM, 2009), 333–342, http://portal.acm.org/citation.cfm?id=1536414.1536461.〕 such as the ring learning with errors key exchange by Peikert.
An algorithm is said to solve the LWE problem if, when given access to samples (\mathbf{x}, y), where \mathbf{x} \in \mathbb{Z}_q^n and y \in \mathbb{Z}_q, with the assurance that, for some fixed linear function f:\mathbb{Z}_q^n \rightarrow \mathbb{Z}_q, y = f(\mathbf{x}) with high probability and deviates from it according to some known noise model, the algorithm can recreate f, or some close approximation of it, with high probability.
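The sample-generation process described above can be sketched in a few lines of Python. This is a minimal illustration, not a cryptographically meaningful instance: the parameters, the function name `lwe_samples`, and the toy uniform noise model (an error drawn from a small interval rather than a Gaussian) are all assumptions made for clarity.

```python
import random

def lwe_samples(s, q, num, noise_bound=1):
    """Generate `num` LWE samples (a, <a, s> + e mod q).

    s is the secret vector in Z_q^n; e is a small error drawn
    uniformly from [-noise_bound, noise_bound] as a toy noise model.
    """
    n = len(s)
    samples = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise_bound, noise_bound)
        y = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, y))
    return samples

# A solver's task is to recover s (equivalently, the linear map
# f(x) = <x, s>) from such pairs. Without the error term e, the
# problem reduces to Gaussian elimination over Z_q; the noise is
# what makes it conjecturally hard.
q = 97
secret = [3, 14, 15]
samples = lwe_samples(secret, q, 4)
```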
== Definition ==
Denote by \mathbb{T}=\mathbb{R}/\mathbb{Z} the additive group on reals modulo one. Denote by A_{\mathbf{s},\phi} the distribution on \mathbb{Z}_q^n \times \mathbb{T} obtained by choosing a vector \mathbf{a}\in \mathbb{Z}_q^n uniformly at random, choosing e according to a probability distribution \phi on \mathbb{T} and outputting (\mathbf{a},\langle \mathbf{a},\mathbf{s} \rangle /q + e) for some fixed vector \mathbf{s} \in \mathbb{Z}_q^n. Here \textstyle \langle \mathbf{a},\mathbf{s} \rangle = \sum_{i=1}^n a_i s_i is the standard inner product \mathbb{Z}_q^n \times \mathbb{Z}_q^n \longrightarrow \mathbb{Z}_q, the division is done in the field of reals (or more formally, this "division by q" is notation for the group homomorphism \mathbb{Z}_q \longrightarrow \mathbb{T} mapping 1 \in \mathbb{Z}_q to 1/q + \mathbb{Z} \in \mathbb{T}), and the final addition is in \mathbb{T}.
The learning with errors problem \mathrm{LWE}_{q,\phi} is to find \mathbf{s} \in \mathbb{Z}_q^n, given access to polynomially many samples of choice from A_{\mathbf{s},\phi}.
For every \alpha > 0, denote by D_\alpha the one-dimensional Gaussian with density function D_\alpha(x)=\rho_\alpha(x)/\alpha, where \rho_\alpha(x)=e^{-\pi (x/\alpha)^2}, and let \Psi_\alpha be the distribution on \mathbb{T} obtained by considering D_\alpha modulo one. The version of LWE considered in most of the results is \mathrm{LWE}_{q,\Psi_\alpha}.
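A sample from the continuous distribution A_{\mathbf{s},\Psi_\alpha} defined above can be sketched as follows. The function name is illustrative; note that the density \rho_\alpha(x)=e^{-\pi (x/\alpha)^2} corresponds to a Gaussian with standard deviation \alpha/\sqrt{2\pi}, which is the conversion used below.

```python
import math
import random

def lwe_mod1_sample(s, q, alpha):
    """One sample from A_{s, Psi_alpha}: (a, <a, s>/q + e mod 1).

    a is uniform over Z_q^n; e is Gaussian noise of width alpha
    (density exp(-pi (x/alpha)^2), i.e. std dev alpha / sqrt(2*pi)),
    and the second component lives on the torus T = R/Z.
    """
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    e = random.gauss(0.0, alpha / math.sqrt(2 * math.pi))
    b = (sum(ai * si for ai, si in zip(a, s)) / q + e) % 1.0
    return a, b

# The search problem LWE_{q, Psi_alpha} asks to recover s from
# polynomially many such (a, b) pairs.
a, b = lwe_mod1_sample([3, 14, 15], 97, 0.05)
```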

Excerpt source: Wikipedia, the free encyclopedia.